
    Noise Simulations For an Inverse-Geometry Volumetric CT System

    This paper examines the noise performance of an inverse-geometry volumetric CT (IGCT) scanner through simulations. The IGCT system uses a large-area scanned source and a smaller array of detectors to rapidly acquire volumetric data with negligible cone-beam artifacts. The first investigation compares the photon efficiency of the IGCT geometry to a 2D parallel-ray system. The second investigation models the photon output of the IGCT source and calculates the expected noise. For the photon efficiency investigation, the same total number of photons was modeled in an IGCT acquisition and a comparable multi-slice 2D parallel-ray acquisition. For both cases, noise projections were simulated and the central axial slice reconstructed. In the second study, to investigate the noise in an IGCT system, the expected x-ray photon flux was modeled and projections were simulated through ellipsoid phantoms. All simulations were compared to theoretical predictions. The results of the photon efficiency simulations verify that the IGCT geometry is as efficient in photon utilization as a 2D parallel-ray geometry. For a 10 cm diameter, 4 cm thick ellipsoid water phantom and reasonable system parameters, the calculated standard deviation was approximately 15 HU at the center of the ellipsoid. For the same size phantom with maximum attenuation equivalent to 30 cm of water, the calculated noise was approximately 131 HU. The theoretical noise predictions for these objects were 15 HU and 112 HU, respectively. These results predict acceptable noise levels for a system with a 0.16 second scan time and 12 lp/cm isotropic resolution.
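
    A minimal sketch (our illustration, not the authors' simulation code) of the quantum-noise model that drives such projection noise simulations; the photon fluence N0 and the water attenuation coefficient are assumed values. It reproduces the Beer-Lambert/Poisson argument for why noise grows so sharply with object attenuation, as in the 10 cm versus 30 cm-equivalent phantoms above:

```python
import numpy as np

rng = np.random.default_rng(0)

N0 = 1.0e5        # photons per ray entering the object (assumed)
mu_water = 0.02   # attenuation of water in 1/mm (rough value near 70 keV)

for water_mm in (100.0, 300.0):   # 10 cm phantom vs. 30 cm water equivalent
    N_mean = N0 * np.exp(-mu_water * water_mm)   # Beer-Lambert mean counts
    counts = rng.poisson(N_mean, size=100_000)   # quantum noise realizations
    p = -np.log(np.maximum(counts, 1) / N0)      # noisy line integrals
    print(f"{water_mm:5.0f} mm water: std(p) = {p.std():.4f}, "
          f"theory 1/sqrt(N) = {1.0 / np.sqrt(N_mean):.4f}")
```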

    Three-Dimensional Reconstruction Algorithm for a Reverse-Geometry Volumetric CT System With a Large-Array Scanned Source

    We have proposed a CT system design to rapidly produce volumetric images with negligible cone-beam artifacts. The investigated system uses a large-array scanned source with a smaller array of fast detectors. The x-ray source is electronically steered across a 2D target every few milliseconds as the system rotates. The proposed reconstruction algorithm for this system is a modified 3D filtered backprojection method. The data are rebinned into 2D parallel-ray projections, most of which are tilted with respect to the axis of rotation. Each projection is filtered with a 2D kernel and backprojected onto the desired image matrix. To ensure adequate spatial resolution and a low artifact level, we rebin the data onto an array with sufficiently fine spatial and angular sampling. Due to finite sampling in the real system, some of the rebinned projections will be sparse, but we hypothesize that the large number of views will compensate for the data missing in any particular view. Preliminary results using simulated data with the expected discrete sampling of the source and detector arrays suggest that high resolution
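
    As a building block, a hedged sketch of the filtering step: the paper describes a 2D kernel applied to each rebinned parallel-ray view, while the sketch below applies the conventional 1D ramp filter along the detector rows of one view, which is the standard FBP ingredient that such a kernel generalizes. All array sizes are illustrative:

```python
import numpy as np

def ramp_filter(projection, pitch):
    """Apply an ideal ramp filter |f| along the last (detector column) axis."""
    n = projection.shape[-1]
    ramp = np.abs(np.fft.fftfreq(n, d=pitch))   # |frequency| kernel
    P = np.fft.fft(projection, axis=-1)
    return np.real(np.fft.ifft(P * ramp, axis=-1))

view = np.random.rand(64, 128)    # stand-in rebinned 2D parallel-ray view
filtered = ramp_filter(view, pitch=1.0)
# each filtered view would then be backprojected onto the image matrix
```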

    Tenfold your photons -- a physically-sound approach to filtering-based variance reduction of Monte-Carlo-simulated dose distributions

    X-ray dose is of steadily growing interest in the interventional suite. With dose being generally difficult to monitor reliably, fast computational methods are desirable. A major drawback of the gold standard based on Monte Carlo (MC) methods is its computational complexity. Besides common variance reduction techniques, filter approaches are often applied to achieve conclusive results within a fraction of the time. Inspired by these methods, we propose a novel approach: we down-sample the target volume based on the fraction of mass, simulate the imaging situation, and then revert the down-sampling. To this end, the dose is weighted by the mass energy absorption, up-sampled, and distributed using a guided filter. Eventually, the weighting is inverted, resulting in accurate high-resolution dose distributions. The approach has the potential to considerably speed up MC simulations, since fewer photons and boundary checks are necessary. First experiments substantiate these assumptions. We achieve a median accuracy of 96.7% and 97.4% of the dose estimation with the proposed method at down-sampling factors of 8 and 4, respectively. While maintaining high accuracy, the proposed method provides a tenfold speed-up. The overall findings suggest that the proposed method has the potential to allow for further efficiency gains.

    Comment: 6 pages, 3 figures, Bildverarbeitung für die Medizin 202
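
    A hedged sketch of the described down-sample/weight/up-sample pipeline, in 2D for brevity; the guided filter follows He et al.'s single-channel formulation, and the shapes, factors, and direction of the mass-energy-absorption weighting are our assumptions, not the paper's implementation:

```python
import numpy as np
from scipy.ndimage import uniform_filter, zoom

def guided_filter(guide, src, radius=4, eps=1e-4):
    """Single-channel guided filter (He et al. 2010), box windows."""
    size = 2 * radius + 1
    mean_I = uniform_filter(guide, size)
    mean_p = uniform_filter(src, size)
    cov_Ip = uniform_filter(guide * src, size) - mean_I * mean_p
    var_I = uniform_filter(guide * guide, size) - mean_I * mean_I
    a = cov_Ip / (var_I + eps)                     # local linear model q = aI+b
    b = mean_p - a * mean_I
    return uniform_filter(a, size) * guide + uniform_filter(b, size)

factor = 4                                         # down-sampling factor
dose_coarse = np.random.rand(32, 32)               # stand-in coarse MC dose
mu_en_coarse = 0.03 + 0.01 * np.random.rand(32, 32)  # mass-energy absorption
mass_fine = np.random.rand(128, 128)               # high-res guidance image

weighted = dose_coarse / mu_en_coarse              # remove material dependence
up = zoom(weighted, factor, order=1)               # back to the fine grid
smooth = guided_filter(mass_fine, up)              # redistribute along mass
mu_en_fine = zoom(mu_en_coarse, factor, order=1)
dose_fine = smooth * mu_en_fine                    # invert the weighting
```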

    An Efficient Estimation Method for Reducing the Axial Intensity Drop in Circular Cone-Beam CT

    Reconstruction algorithms for circular cone-beam (CB) scans have been extensively studied in the literature. Since insufficient data are measured, an exact reconstruction is impossible for such a geometry. If the reconstruction algorithm assumes zeros for the missing data, as the standard FDK algorithm does, a major type of resulting CB artifact is the intensity drop along the axial direction. Many algorithms have been proposed to improve image quality in the face of this missing-data problem; however, the development of an effective and computationally efficient algorithm remains a major challenge. In this work, we propose a novel method for estimating the unmeasured data and reducing the intensity-drop artifacts. Each CB projection is analyzed in Radon space via Grangeat's first derivative. Assuming the CB projection is taken from a parallel-beam geometry, we extract those data that reside in the unmeasured region of Radon space. These data are then used, as in a parallel-beam geometry, to calculate a correction term, which is added together with Hu's correction term to the FDK result to form the final reconstruction. Further approximations are then made in the calculation of the additional term, and the final formula is implemented very efficiently. The algorithm's performance is evaluated using computer simulations on analytical phantoms. Comparison with results from other existing algorithms shows that the proposed algorithm achieves superior reduction of the axial intensity-drop artifacts with high computational efficiency.
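
    In our notation (not the paper's), the compositional structure of the final image described above is

```latex
f_{\text{final}}(\mathbf{x}) \;=\; f_{\text{FDK}}(\mathbf{x})
  \;+\; f_{\text{Hu}}(\mathbf{x}) \;+\; f_{\text{est}}(\mathbf{x}),
```

    where f_FDK is the standard FDK reconstruction, f_Hu is Hu's correction term, and f_est is the correction term computed from the data estimated in the unmeasured region of Radon space.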

    Precision Learning: Towards Use of Known Operators in Neural Networks

    In this paper, we consider the use of prior knowledge within neural networks. In particular, we investigate the effect of a known transform within the mapping from the input data space to the output domain. We demonstrate that the use of known transforms is able to change maximal error bounds. To explore the effect further, we consider the problem of X-ray material decomposition as an example in which to incorporate additional prior knowledge. We demonstrate that the inclusion of a non-linear function known from the physical properties of the system is able to reduce prediction errors, thereby improving prediction quality from an SSIM of 0.54 to 0.88. This approach is applicable to a wide set of applications in physics and signal processing that provide prior knowledge of such transforms. Maximal error estimation and network understanding could also be facilitated within the context of precision learning.

    Comment: accepted at ICPR 201
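
    A minimal PyTorch sketch of the known-operator idea (the architecture and the exponential operator are illustrative assumptions, not the paper's model): a fixed, physics-derived transform sits inside the network with no trainable weights, and learning happens only in the layers around it:

```python
import torch
import torch.nn as nn

class KnownOperatorNet(nn.Module):
    def __init__(self, n_in=2, n_hidden=16, n_out=2):
        super().__init__()
        self.pre = nn.Sequential(nn.Linear(n_in, n_hidden), nn.ReLU())
        self.post = nn.Linear(n_hidden, n_out)

    @staticmethod
    def known_transform(z):
        # Fixed non-linearity assumed known from the physics of the
        # system (Beer-Lambert-style exponentiation here); it carries
        # no trainable parameters.
        return torch.exp(-torch.relu(z))

    def forward(self, x):
        return self.post(self.known_transform(self.pre(x)))

net = KnownOperatorNet()
y = net(torch.rand(8, 2))   # trainable layers learn around the fixed operator
```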

    Geometry Analysis of an Inverse-Geometry Volumetric CT System With Multiple Detector Arrays

    An inverse-geometry volumetric CT (IGCT) system for imaging in a single fast rotation without cone-beam artifacts is being developed. It employs a large scanned source array and a smaller detector array. For a single-source/single-detector implementation, the FOV is limited to a fraction of the source size. Here we explore options to increase the FOV without increasing the source size by using multiple detectors spaced apart laterally to increase the range of radial distances sampled. We also examine multiple-source-array systems for faster scans. To properly reconstruct the FOV, Radon space must be sufficiently covered and sampled in a uniform manner. The optimal placement of the detectors relative to the source was determined analytically given the system constraints (5 cm detector width, 25 cm source width, 45 cm source-to-isocenter distance). For a 1×3 system (three detectors per source), the detector spacing (DS) was 18° and the source-to-detector distances (SDD) were 113, 100, and 113 cm, providing optimal Radon sampling and a FOV of 44 cm. For multiple-source systems, the angular spacing between sources cannot exceed 125°, since detectors corresponding to one source must not be occluded by a second source. Therefore, for 2×3 and 3×3 systems using the above DS and SDD, the optimal spacing between sources is 115° and 61°, respectively, requiring minimum scan rotations of 115° and 107°. A 3×3 system can also be much faster than a 2×3 system for full 360° dataset scans (120° vs. 245°). We found that a significantly increased FOV can be achieved while maintaining uniform radial sampling, along with a substantial reduction in scan time, using several different geometries. Further multi-parameter optimization is underway.
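
    A toy construction (ours, not the paper's analysis) that makes the lateral-detector argument concrete: for one source array and one detector array rotated about the isocenter, compute the perpendicular distance of every source-detector ray from the isocenter, i.e., the radial distances that the pair samples. Dimensions loosely follow the constraints quoted above:

```python
import numpy as np

src_to_iso = 450.0             # mm, source-to-isocenter distance
sdd = 1000.0                   # mm, source-to-detector distance (center pair)
det_offset = np.deg2rad(18.0)  # lateral detector spacing quoted above

# source positions along a 250 mm array, perpendicular to the central ray
xs = np.linspace(-125.0, 125.0, 16)
src = np.stack([xs, np.full_like(xs, src_to_iso)], axis=1)

# one 50 mm detector array, rotated about the isocenter by det_offset
xd = np.linspace(-25.0, 25.0, 8)
det = np.stack([xd, np.full_like(xd, src_to_iso - sdd)], axis=1)
c, s = np.cos(det_offset), np.sin(det_offset)
det = det @ np.array([[c, s], [-s, c]])   # rotate the row vectors

# perpendicular distance of every source-detector ray from the isocenter
radii = []
for p in src:
    d = det - p                                     # ray directions
    d /= np.linalg.norm(d, axis=1, keepdims=True)
    radii.extend(np.abs(p[0] * d[:, 1] - p[1] * d[:, 0]))
print(f"radial distances sampled: {min(radii):.1f} to {max(radii):.1f} mm")
```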

    Deconvolution-Based CT and MR Brain Perfusion Measurement: Theoretical Model Revisited and Practical Implementation Details

    Deconvolution-based analysis of CT and MR brain perfusion data is widely used in clinical practice and remains a topic of ongoing research. In this paper, we present a comprehensive derivation and explanation of the underlying physiological model for intravascular tracer systems. We also discuss practical details that are needed to properly implement algorithms for perfusion analysis. Our description of the practical computer implementation focuses on the most frequently employed algebraic deconvolution methods based on the singular value decomposition. In particular, we discuss the need for regularization in order to obtain physiologically reasonable results. We include an overview of relevant preprocessing steps and provide numerous references to the literature. We cover both CT and MR brain perfusion imaging in this paper because they share many common aspects. The combination of the theoretical and practical aspects of perfusion analysis explicitly emphasizes the simplifications to the underlying physiological model that are necessary in order to apply it to data measured with current CT and MR scanners.
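
    A compact sketch of truncated-SVD deconvolution of the kind the paper discusses, on synthetic curves: the convolution model tissue(t) = (AIF * k)(t) with k(t) = CBF · R(t) is the standard intravascular tracer model, but the curves, sampling, and truncation threshold below are illustrative assumptions:

```python
import numpy as np

def tsvd_deconvolve(aif, tissue, dt, rel_thresh=0.2):
    """Recover k(t) = CBF * R(t) from tissue = dt * conv(aif, k)
    by truncated-SVD inversion of the convolution matrix."""
    n = len(aif)
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])     # lower-triangular Toeplitz
    U, S, Vt = np.linalg.svd(A)
    S_inv = np.zeros_like(S)
    keep = S >= rel_thresh * S.max()           # regularization: drop small
    S_inv[keep] = 1.0 / S[keep]                # singular values
    return Vt.T @ ((U.T @ tissue) * S_inv)

dt = 1.0                                       # s, sampling interval
t = np.arange(0.0, 60.0, dt)
aif = np.exp(-0.5 * ((t - 15.0) / 5.0) ** 2)   # toy arterial input function
k_true = 0.6 * np.exp(-t / 4.0)                # toy CBF-scaled residue function
tissue = dt * np.convolve(aif, k_true)[: len(t)]   # forward model
k_est = tsvd_deconvolve(aif, tissue, dt)
print(f"true peak {k_true.max():.3f}, estimated peak {k_est.max():.3f}")
```

    Without the truncation step, the small singular values of the convolution matrix amplify measurement noise into physiologically meaningless, oscillating residue functions, which is the need for regularization the paper emphasizes.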

    Effects of Tissue Material Properties on X-Ray Image, Scatter and Patient Dose Determined using Monte Carlo Simulations

    With increasing patient and staff X-ray radiation awareness, many efforts have been made to develop accurate patient dose estimation methods. To date, Monte Carlo (MC) simulations are considered the gold standard for simulating the interaction of X-ray radiation with matter. However, the sensitivity of MC simulation results to variations in the experimental or clinical setup of image-guided interventional procedures has only been studied to a limited extent. In particular, the impact of the patient's material composition is poorly investigated, mainly because these methods are commonly validated in phantom studies utilizing a single anthropomorphic phantom. In this study, we therefore investigate the impact of the patient material parameter mapping on the outcome of MC X-ray dose simulations. A computational phantom geometry is constructed and three different commonly used material composition mappings are applied. We used the MC toolkit Geant4 to simulate X-ray radiation in an interventional setup and compared the differences in dose deposition, scatter distributions, and the resulting X-ray images. The evaluation shows discrepancies between the different material composition mappings of up to 20% for directly irradiated organs. These results highlight the need for a standardization of material composition mapping for MC simulations in a clinical setup.

    Comment: 6 pages, 4 figures, Bildverarbeitung für die Medizin 201
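
    To make "material composition mapping" concrete, a hedged sketch (thresholds and densities are textbook-style assumptions, not the study's three mappings) of converting a CT volume in HU into material indices and mass densities as MC input; choosing different tables of this kind is exactly the variation the study quantifies:

```python
import numpy as np

MATERIALS = [
    # (name, upper HU bound, assumed mass density in g/cm^3)
    ("air",         -400.0, 0.0012),
    ("soft_tissue",  150.0, 1.06),
    ("bone",        np.inf, 1.92),
]

def map_materials(hu_volume):
    """Piecewise HU thresholding into material index and density maps."""
    mat_idx = np.zeros(hu_volume.shape, dtype=np.uint8)
    density = np.zeros(hu_volume.shape, dtype=np.float32)
    lower = -np.inf
    for idx, (_, upper, rho) in enumerate(MATERIALS):
        mask = (hu_volume > lower) & (hu_volume <= upper)
        mat_idx[mask] = idx
        density[mask] = rho
        lower = upper
    return mat_idx, density

hu = np.array([[-1000.0, 40.0], [300.0, 1200.0]])   # toy 2x2 slice
idx, rho = map_materials(hu)
```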